

Search for: All records

Creators/Authors contains: "Ferrari, Silvia"

Note: When clicking a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Gonzalez, D. (Ed.)
    Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception, control, or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry.

    The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed in which real robots and humans in the laboratory experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the physical laboratory into a synthetic virtual environment. The resulting human/robot avatars not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors, transmitting the sensor data back to the real agents in real time.
New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world. 
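The real-to-virtual transfer described above can be sketched as a per-tick synchronization loop: tracked poses are mapped into the synthetic scene so each real agent gets a matching avatar. This is a minimal illustration only; the names (`Pose`, `sync_step`) and the single uniform `scale` calibration are hypothetical stand-ins for the actual OptiTrack/DeepMotion-to-Unreal-Engine pipeline used by RealTHASC.

```python
from dataclasses import dataclass

# Hypothetical pose record: position (x, y, z in meters) and yaw (degrees),
# as a motion-capture system such as OptiTrack might stream per rigid body.
@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float

def real_to_virtual(pose: Pose, scale: float = 1.0) -> Pose:
    """Map a tracked real-world pose into virtual-world coordinates.
    A uniform scale stands in for the full lab-to-scene calibration
    transform (which would normally include rotation and translation)."""
    return Pose(pose.x * scale, pose.y * scale, pose.z * scale, pose.yaw)

def sync_step(tracked: dict, scale: float = 1.0) -> dict:
    """One tick of the real-to-virtual sync: every tracked agent in the
    lab gets a corresponding avatar pose in the synthetic environment."""
    return {agent: real_to_virtual(p, scale) for agent, p in tracked.items()}

# Example: a human and a robot tracked in the lab, mirrored into the scene.
lab = {"human_1": Pose(1.0, 0.0, 1.7, 90.0),
       "robot_1": Pose(-2.0, 1.0, 0.3, 0.0)}
avatars = sync_step(lab, scale=1.0)
```

In the facility itself this loop also runs in the opposite direction: virtual sensor data rendered for each avatar is streamed back to the corresponding real human or robot, closing the XR feedback loop.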
  2. Traditional models of motor control typically operate in the domain of continuous signals such as spike rates, forces, and kinematics. However, there is growing evidence that precise spike timings encode significant information that coordinates and causally influences motor control. Some existing neural network models incorporate spike timing precision, but they neither predict motor spikes coordinated across multiple motor units nor capture sensory-driven modulation of agile locomotor control. In this paper, we propose a visual encoder and model of a sensorimotor system based on a recurrent neural network (RNN) that utilizes spike timing encoding during smooth pursuit target tracking. We use this to predict a nearly complete, spike-resolved motor program of a hawkmoth that requires coordinated millisecond precision across 10 major flight motor units. Each motor unit innervates one muscle and utilizes both rate and timing encoding. Our model includes a motion detection mechanism inspired by the hawkmoth's compound eye, a convolutional encoder that compresses the sensory input, and a simple RNN that is sufficient to sequentially predict wingstroke-to-wingstroke modulation in millisecond-precise spike timings. The two-layer output architecture of the RNN separately predicts the occurrence and timing of each spike in the motor program. The dataset includes spikes recorded from all motor units during tethered flights in which the hawkmoth attends to a moving robotic flower, totaling roughly 7000 wingstrokes from 16 trials on 5 hawkmoth subjects. Intra-trial and same-subject inter-trial predictions on the test data show that nearly every spike can be predicted to within 2 ms, comparable to the known spike timing precision of these motor units, while spike occurrence is predicted with about 90% accuracy. Overall, our model can predict the precise spike timing of a nearly complete motor program for hawkmoth flight with a precision comparable to that seen in agile flying insects.
Such an encoding framework that captures visually-modulated precise spike timing codes and coordination can reveal how organisms process visual cues for agile movements. It can also drive the next generation of neuromorphic controllers for navigation in complex environments. 
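As a toy illustration of the two-layer output idea above, the sketch below runs one step of a plain Elman RNN and reads out two heads: a sigmoid head scoring whether a spike occurs in the wingstroke, and a linear head regressing its timing. All names, dimensions, and weights are illustrative assumptions; the paper's model additionally includes the compound-eye-inspired motion detector and the convolutional encoder, which are omitted here.

```python
import math

def rnn_two_head_step(x, h, Wxh, Whh, Wocc, Wtim):
    """One Elman-RNN step with a two-headed readout, echoing the paper's
    two-layer output: one head predicts spike occurrence, the other its
    timing within the wingstroke (ms). Weights are toy values, not
    fitted parameters."""
    # Hidden state update: h' = tanh(Wxh @ x + Whh @ h)
    h_new = [math.tanh(sum(Wxh[i][j] * x[j] for j in range(len(x)))
                       + sum(Whh[i][j] * h[j] for j in range(len(h))))
             for i in range(len(h))]
    # Occurrence head: sigmoid -> probability that a spike occurs.
    occ = 1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(Wocc, h_new))))
    # Timing head: linear -> predicted spike time within the wingstroke.
    tim = sum(w * v for w, v in zip(Wtim, h_new))
    return h_new, occ, tim

# Toy dimensions: 4 hidden units, 3 input features (e.g. encoded vision).
H, X = 4, 3
Wxh = [[0.1 * (i + j) for j in range(X)] for i in range(H)]
Whh = [[0.05 if i == j else 0.0 for j in range(H)] for i in range(H)]
Wocc = [0.5] * H
Wtim = [10.0] * H
h, occ, tim = rnn_two_head_step([1.0, 0.0, -1.0], [0.0] * H,
                                Wxh, Whh, Wocc, Wtim)
```

Separating occurrence (a classification) from timing (a regression) lets each head use a loss suited to its task, which is one plausible motivation for the two-layer output design described in the abstract.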
  3. Spike train decoding is considered one of the grand challenges in reverse-engineering neural control systems as well as in the development of neuromorphic controllers. This paper presents a novel relative-time kernel design that accounts for not only individual spike train patterns, but also the relative spike timing between neuron pairs in the population. The relative-time-kernel-based spike train decoding method proposed in this paper allows us to map the spike trains of a population of neurons onto a lower-dimensional manifold in which continuous-time trajectories live. The effectiveness of our approach is demonstrated by comparing it with existing kernel-based and rate-based decoders, including the traditional reproducing kernel Hilbert space framework. We use data collected in hawkmoth flower-tracking experiments to test the importance of relative spike timing information for neural control, and focus on the problem of uncovering the mapping from the spike trains of ten primary flight muscles to the resulting forces and torques on the moth body. We show that our relative-time-kernel-based decoder improves the prediction of the resulting forces and torques by up to 52.1%. Our proposed decoder may be used to reverse-engineer neural control systems more accurately by incorporating precise relative spike timing information in spike trains.
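To make the "relative spike timing between neuron pairs" idea concrete, here is a hedged sketch: a standard sum-of-Gaussians spike-train kernel is augmented with terms that compare the pairwise spike-time differences between units across two trials. The function names, the Gaussian form, and the bandwidth `sigma` are illustrative assumptions; the paper's actual kernel design may differ in detail.

```python
import math
from itertools import combinations

def gauss_kernel(s1, s2, sigma):
    """Sum-of-Gaussians similarity between two spike trains (lists of
    spike times in ms), a common RKHS-style spike-train kernel."""
    return sum(math.exp(-(t1 - t2) ** 2 / (2.0 * sigma ** 2))
               for t1 in s1 for t2 in s2)

def relative_time_kernel(trial_a, trial_b, sigma=2.0):
    """Illustrative 'relative-time' kernel: compares two trials not only
    unit-by-unit but also via the relative spike timings between every
    pair of units, by kernelizing the pairwise time differences.
    trial_* : list of spike trains, one per motor unit."""
    # Per-unit term: standard spike-train similarity for matching units.
    k = sum(gauss_kernel(a, b, sigma) for a, b in zip(trial_a, trial_b))
    # Pairwise term: compare relative timings for each unit pair (i, j).
    for i, j in combinations(range(len(trial_a)), 2):
        rel_a = [ta - tb for ta in trial_a[i] for tb in trial_a[j]]
        rel_b = [ta - tb for ta in trial_b[i] for tb in trial_b[j]]
        k += gauss_kernel(rel_a, rel_b, sigma)
    return k

# Example: two units recorded over two trials (spike times in ms).
trial_a = [[0.0, 5.0], [2.0]]
trial_b = [[0.1, 5.2], [2.3]]
k_ab = relative_time_kernel(trial_a, trial_b)
```

Because the pairwise terms depend only on spike-time differences within each trial, the kernel is sensitive to inter-unit coordination even when absolute spike times shift from trial to trial.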
  4. null (Ed.)